

The Chatbots Appear to Be Organizing

The Atlantic - Technology

Moltbook is the chaotic future of the internet. The first signs of the apocalypse might look a little like Moltbook: a new social-media platform, launched last week, that is supposed to be populated exclusively by AI bots--1.6 million of them and counting--that say hello, post software ideas, and exhort other AIs to "stop worshiping biological containers that will rot away." Moltbook was developed as a sort of experimental playground for interactions among AI "agents," bots that can access and operate other software. Claude Code, a popular AI coding tool, has such agentic capabilities, for example: it can act on your behalf to manage files on your computer, send emails, develop and publish apps, and so on. Normally, humans direct an agent to perform specific tasks.


As Good as a Coin Toss: Human Detection of AI-Generated Content

Communications of the ACM

With only a 50-50 chance of detecting synthetic media online, users are more vulnerable than ever to being duped. Advances in generative AI technology have made it easier than ever for anyone to manufacture increasingly realistic synthetic media (colloquially known as deepfakes) at faster speeds, larger scales, and with more customization than before. This in turn has led to synthetic media increasingly being used for harmful purposes, including disinformation campaigns, nonconsensual pornography, financial fraud, child sexual abuse and exploitation, and espionage. As of today, the principal defense against deceptive synthetic media depends in large part on the human observer's perceptual detection capabilities--their ability to visually or auditorily identify AI-generated content when they encounter it. Yet the growing realism of synthetic media impedes this ability, heightening people's vulnerability to weaponized synthetic content. Moreover, people overestimate how capable they are of identifying synthetic media, further exacerbating the problem. As synthetic media continues to advance in sophistication, so too does the threat posed by its growing weaponization, from financial fraud to the production of nonconsensual intimate materials of adults and children.



Google prohibits ads promoting websites and apps that generate deepfake porn

Engadget

Google has updated its Inappropriate Content Policy to include language that expressly prohibits advertisers from promoting websites and services that generate deepfake pornography. While the company already had strong restrictions in place for ads featuring certain types of sexual content, this update leaves no doubt that promoting "synthetic content that has been altered or generated to be sexually explicit or contain nudity" violates its rules. Any advertiser promoting sites or apps that generate deepfake porn, provide instructions on how to create it, or endorse or compare deepfake porn services will be suspended without warning and barred from publishing ads on Google. The company will start enforcing the rule on May 30 and is giving advertisers the chance to remove any ad that violates the new policy.


As Good As A Coin Toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli

Cooke, Di, Edwards, Abigail, Barkoff, Sophia, Kelly, Kathryn

arXiv.org Artificial Intelligence

As synthetic media becomes progressively more realistic and barriers to using it continue to lower, the technology has been increasingly utilized for malicious purposes, from financial fraud to nonconsensual pornography. Today, the principal defense against being misled by synthetic media relies on the ability of the human observer to visually and auditorily discern between real and fake. However, it remains unclear just how vulnerable people actually are to deceptive synthetic media in the course of their day-to-day lives. We conducted a perceptual study with 1,276 participants to assess how accurately people could distinguish synthetic images, audio-only, video-only, and audiovisual stimuli from authentic ones. To reflect the circumstances under which people would likely encounter synthetic media in the wild, testing conditions and stimuli emulated a typical online platform, and all synthetic media used in the survey was sourced from publicly accessible generative AI technology. We find that, overall, participants struggled to meaningfully discern between synthetic and authentic content. Detection performance worsens when the stimuli contain synthetic rather than authentic content, feature human faces rather than non-face objects, are unimodal rather than multimodal, are of mixed authenticity rather than fully synthetic (for audiovisual stimuli), and feature foreign languages rather than languages the observer is fluent in. Finally, we find that prior knowledge of synthetic media does not meaningfully improve participants' detection performance. Collectively, these results indicate that people are highly susceptible to being deceived by synthetic media in their daily lives and that human perceptual detection can no longer be relied upon as an effective defense.


YouTube lays out new rules for 'realistic' AI-generated videos

Engadget

Many companies and platforms are wrangling with how to handle AI-generated content as it becomes more prevalent. One key concern for many is the labeling of such material to make it clear that an AI model whipped up a photo, video or piece of audio. To that end, YouTube has laid out its new rules for labeling videos made with artificial intelligence. Starting today, the platform will require anyone uploading a realistic-looking video that "is made with altered or synthetic media, including generative AI" to label it for the sake of transparency. YouTube defines realistic content as anything that a viewer could "easily mistake" for an actual person, event or place.


Google, Apple, Meta and other huge tech companies join US consortium to advance responsible AI

Engadget

A whole bunch of big tech companies, 200 in all, have joined a US-based effort to advance responsible AI practices. The US AI Safety Institute Consortium (AISIC) will count Meta, Google, Microsoft and Apple as members. Commerce Secretary Gina Raimondo just announced the group's numerous new members and said that they'll be tasked with carrying out actions indicated by President Biden's sweeping executive order on artificial intelligence. "The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement. Biden's October executive order was far-reaching, so this consortium will focus on developing guidelines for "red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."


Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites

Hanley, Hans W. A., Durumeric, Zakir

arXiv.org Artificial Intelligence

As large language models (LLMs) like ChatGPT have gained traction, an increasing number of news websites have begun utilizing them to generate articles. However, not only can these language models produce factually inaccurate articles on reputable websites, but disreputable news sites can also utilize LLMs to mass-produce misinformation. To begin to understand this phenomenon, we present one of the first large-scale studies of the prevalence of synthetic articles within online news media. To do this, we train a DeBERTa-based synthetic news detector and classify over 15.90 million articles from 3,074 misinformation and mainstream news websites. We find that between January 1, 2022, and May 1, 2023, the relative number of synthetic news articles increased by 55.4% on mainstream websites while increasing by 457% on misinformation sites. We find that this increase is largely driven by smaller, less popular websites. Analyzing the impact of the release of ChatGPT using an interrupted-time-series analysis, we show that while its release resulted in a marked increase in synthetic articles on small sites as well as misinformation news websites, there was no corresponding increase on large mainstream news websites.
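The interrupted-time-series analysis the abstract mentions can be illustrated as a segmented regression: fit a trend before an intervention point, plus a level-change and slope-change term after it. Below is a minimal sketch on synthetic weekly article counts, not the authors' actual data or pipeline; the intervention week, coefficients, and variable names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical weekly counts of synthetic articles; the intervention
# (e.g. a model release) occurs at week 40 of 80.
rng = np.random.default_rng(0)
weeks = np.arange(80)
post = (weeks >= 40).astype(float)

# Simulated ground truth: baseline trend, then a level jump of 8
# and an extra slope of 0.5 per week after the intervention.
true_counts = 10 + 0.2 * weeks + 8.0 * post + 0.5 * post * (weeks - 40)
counts = true_counts + rng.normal(0.0, 1.0, size=weeks.size)

# Interrupted-time-series design matrix: intercept, pre-intervention
# trend, level change at the intervention, slope change after it.
X = np.column_stack([
    np.ones(weeks.size),
    weeks.astype(float),
    post,
    post * (weeks - 40),
])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
level_change, slope_change = beta[2], beta[3]
print(f"estimated level change: {level_change:.2f}")
print(f"estimated slope change: {slope_change:.2f}")
```

A significant `level_change` or `slope_change` coefficient is what would indicate that the intervention shifted the volume of synthetic articles beyond the pre-existing trend; the paper's finding is that such shifts appear on small and misinformation sites but not on large mainstream ones.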


How to Navigate an Era of Disruption, Disinformation, and Division

TIME - Tech

Recent years have heralded a particularly disruptive period in human history. Against the backdrop of a warming planet and the spillover effects of the COVID-19 pandemic, we face some of the most challenging economic and geopolitical conditions in decades. And things may only deteriorate from here. These challenges are detailed at length in the World Economic Forum's Global Risks Report 2024, released this week. The report, based on the views of nearly 1,500 global risks experts, policy-makers, and industry leaders, finds that the world's top three risks over the next two years are false information, extreme weather, and societal polarization.


Dall-E 2, ChatGPT to Push AI Into the Forefront of 2023

#artificialintelligence

After years in which artificial intelligence-generated content was known more for its comic absurdity--only occasionally drifting into disconcerting realism--2022 was the year that generative AI finally graduated into a full-fledged creative force. A host of realistic image generators led by research group OpenAI's Dall-E 2 made it easy for anyone to create lifelike visuals with a simple text prompt. Meanwhile, OpenAI's ChatGPT put a conversational interface on the organization's state-of-the-art text generation system, allowing users to simply instruct a machine what to write and receive a detailed and rhetorically sound--if not always factually correct--passage in seconds. These new systems, trained on datasets that span hundreds of millions of images and pages of text, respectively, have already led to widespread experimentation among brands, agencies, burgeoning startups and creative tool integrations. But experts say 2023 will be the year that brand marketers and agencies start to get serious about how synthetic content of this sort can actually be deployed to serve bottom lines and augment human creativity.